
BPS Psychology Research Day keynote: The Impact of Psychological Research

I was delighted to be invited to deliver the keynote at the BPS Psychology Research Day on 9th November 2023. The talk focused (unsurprisingly!) on research impact: I covered what research is, what impact is (and isn’t), gave some examples of impact from psychology research, and talked about principles of practice.

It’s always a privilege to be involved in psychology focused events (I also very much enjoyed being part of the Division of Health Psychology panel on Tuesday), and I hope the talk helped showcase the enormous value psychology research brings.

A copy of the slides is here for anyone who wants them:


New patient-centred outcome measures for venous thromboembolism

Some of you will know about my blood clot history. Potted version – a larger-than-should-be-fair deep vein thrombosis after having a baby in 2008, unsuccessful vein bypass surgery in 2010, a stupid and unexpected clot in 2016, a few other clots here and there until TA-DA I’m fixed by venous stents in 2018.

Whilst the whole clot thing is literally a complete pain, I’ve been enormously lucky to be able to use the experience in a positive way after years of it being pretty horrid. I’ve now been involved in various activities as a patient advocate for a while, including being part of an International Consortium for Health Outcomes Measurement (ICHOM) working group to develop a standard set of patient-centred outcome measures for Venous Thromboembolism (VTE).

The experience of this has been fascinating, partly as a patient, partly as a health psych and partly as an impact person. In a nutshell, the process has involved a series of meetings over a number of months, where clinical experts in VTE and a number of patients living with venous conditions review, assess and vote on what outcomes “matter most to people (≥16 years old) with Pulmonary Embolism, Deep Vein Thrombosis, and other related conditions”. The first thing that struck me throughout this was the equal weighting with which my voice as a patient was included in decision making. The ICHOM team and the clinicians involved really did try to ensure the patient voice was brought to the fore in all conversations. The second thing was the way the process effectively combined rigour with democracy, giving open platforms to discuss issues which were subsequently fed into group-wide votes. Thirdly, whilst I can’t pretend to have always understood the clinical terminology, care was taken to clarify in the meetings or offer additional discussions to explain things further. I know I have an advantage as a patient already being ‘in health and research’ as it were, but that notwithstanding, at no point did I feel my inclusion was tokenistic, rushed or glossed over. Quite the opposite.

The final ICHOM VTE outcome set has now been publicly launched and is free to access. It includes measures across four categories – patient-reported outcomes, long-term consequences of disease, complications, and treatment-related complications – covering both the experience of care and what it’s like to live with venous conditions. The site provides a series of support materials, such as reference guides, and is a new contribution to the existing set of 40 (and growing) outcome measure sets for other conditions.

My hope is that this work will herald a new way of supporting patients with VTE, combining clinical excellence with patient experiences. Venous disorders can absolutely wipe quality of life from under your feet, but with a more values-led, comprehensive and standardised set of measures, over time we might just be able to make life a bit better. I’m enormously proud to have been part of this, and my huge and personal thanks go to the Chairs – F.A. (Erik) Klok (Leiden University Medical Center) and Stephen Black (King’s College London), the ICHOM project team and the working group members who worked so hard to get this right for VTE patients across the world.

ICHOM VTE Standard Set (image: https://connect.ichom.org/patient-centered-outcome-measures/venous-thromboembolism/)

DHP session: Impact, Health Psychology and you.

This post accompanies a Division of Health Psychology BREATHE pre-conference workshop, June 2021

I have always felt immensely lucky to call myself a Health Psychologist. I mean, like legitimately, not some kind of niche fancy dress situation. Anyway, one of the things that has always kept me gravitationally pulled to health psychology (HP), even as my career has headed impact-wards, is the core premise of ‘making a difference’. Be it through research, practice, teaching or any aspect of the breadth of work HP covers, at its heart HP is about working out better, stronger and fairer ways to support people to make positive changes. I suspect many in the profession are in it for much the same reason.

I was delighted to be invited to deliver a workshop for the Division of Health Psychology 2021 annual conference. DHP is my academic home, but one I’ve probably wandered away from a little too long doing this impact thing. It’s been a scenic route, and it’s great to be back in the fold.

The aim of this session was threefold: to cover what impact is (and isn’t), to look at it through the lens of HP, and to help people find themselves in this thing called impact. Beyond that, I suppose I wanted to break down some of the confusion, myths and frustrations around impact, and give people space and time to look at how it fits meaningfully, appropriately and authentically within their work.

My slides and a bunch of references and resources are below. Enjoy!

UPDATE: Some responses to questions raised in the session now here.

Slides


Next steps for REF? We need to repair the sector’s health first

This post accompanies a talk at the Westminster Forum Projects | Next steps for the REF – independence and positive research environments, delivering and measuring impact, and the future of open access event, 23/3/21. Slides available here


A while ago I was invited to speak at the Westminster Forum in a panel session entitled “Research environments in the REF – stimulating positive cultures and wellbeing, academic independence and interdisciplinary research“. When I first accepted the invitation we were pre-COVID, some time ahead of the REF submission, and the prospect of talking about ‘next steps’ seemed eminently sensible. However, with the rescheduled conference now clashing with the final throes of REF (no criticism, simply an artefact of REF date extensions plus the challenges of arranging a conference in the midst of a pandemic), I find my mindset has changed. Not because we shouldn’t think about next steps, but because if we don’t take stock of the damage across the sector first, we can never really reach a point of wellbeing.

Before I start, it’s important to note that we shouldn’t pretend REF can be blamed for everything – that would be an immensely simplistic and scapegoating way to account for all the ills of the sector – but with an intentional focus here on REF and impact, it’s essential that we acknowledge the collateral damage felt by so many. REF is undoubtedly a double-edged sword, certainly for impact; it drove the need for jobs in this space (my own included) and legitimised those working in more ‘applied’ fields, yet simultaneously formalised and scrutinised impact to an arguably harmful level. Impact has been, to a very large extent, conflated with REF, and whilst the broader impact pilot light hasn’t gone out, impact strategies are now immeasurably flavoured by anxieties about ‘what counts’ and ‘what’s biggest’. We talk about impact as a whole, yet screen out the weaker chaff from the stronger wheat to maximise our chances of income. Whilst that seems an enormously sensible strategy for an institution under assessment, it takes no account of the damage and disenfranchisement of those not picked for the Case Study team. Impact is for everyone. Go do impact. No not that way, that’s not enough. Move aside for those doing better stuff. As much as we’d like to pretend we don’t, we still trade off impact star players for our cases with no recognition of how many others were put on the subs bench.

REF has introduced terms into our academic lexicon we will struggle to unlearn. Outputs, impacts and people are appraised in terms of how ‘REF-able’ they are. Evidence has – much to the chagrin of my international counterparts – become both a verb (‘can it be evidenced?’) and a noun (‘we need the evidence’). Yet its language legacy is not matched by sustained capacity or expertise. A 2020 survey led by ARMA showed that 58% of impact personnel were on short-term contracts, with 72% of contracts finishing at the end of REF. 72%. We grew an army of people to deliver REF impact, now or soon to be disbanded, with those left burned out and wondering how to re-energise a tired and distrusting sector.

I talk routinely about the need for impact literacy (the understanding of impact) and institutional health (the infrastructure needed to support healthy practices). However, these need to take a temporary backseat before thinking about ‘next steps’, whilst we recognise how the sector is feeling. I’m aware that focusing on ‘feelings’ may appear to be a superficial and transient indulgence given sectoral pressures to secure ever-shrinking funds, but if we don’t genuinely take stock and understand why such committed people are so burnt out, so despondent, we will not only lose vital knowledge and skills, but also irrevocably stain the relationships between academia and society.

The sector is not well

Ahead of the talk I reached out to colleagues and was saddened, yet not at all surprised, by their level of despondency. Within impact, people who have fought so hard over the years to drive a positive impact culture are now exhausted and planning to leave their job or even the sector. Tired of the narrowing of impact to page length and font compliance. Exhausted by the discord in rhetoric between ‘impact matters’ and ‘only if it’s big’, and disillusioned by the tensions arising from conflicting rules and the disparity between weightings for impact and the underlying environment. It says it all that when I asked them for images to illustrate REF, I received pictures of burning buildings and frayed rope. I also reached out more widely to colleagues in the academic community* to invite comments on ‘next steps’ for REF, and was inundated with stories of demotivation, damage and despondency. There’s no way to do justice to the extent or depth of these issues, but they are perhaps best encapsulated by one comment that “the damage done perpetuates many harms and maintains toxic working practices”. Issues include:

  • Inequalities cemented and deepened; those who can work longer hours and travel, who are physically well and who have no care responsibilities are more able to meet REF-related progression criteria and thus ‘climb the ladder’. Those who can’t, including part-time academics, disproportionately struggle
  • Academic methodologists and non-research staff made invisible, their work pivotal to, but omitted from, accounts of impact glory.
  • Anxieties related to rule interpretation, risks of accidental non-compliance, second guessing reviewer expectations, seeking to perfect cases without knowing what ‘perfect’ looks like, and marrying authenticity of accounts within rules and template space.
  • The making of an unrelenting engine; Excessive administrative burden, substantial time demands beyond standard workload, continual internal deadlines, multiple iterations of cases and review points, excessive process time and energy, all of which prevent full consideration of the consequences of decisions taken.
  • Disciplinary disprivilege; despite recognition of subject-based differences in the relationship between research and impact, certain kinds of research/impact remain privileged by the exercise (eg ICS template unsuitable for more iterative participatory or practice-based research)
  • Disillusionment; early optimism that social engagement would be valued (alongside outputs) swiftly replaced with despondency over requirements to instrumentalise research and commodify partnerships
  • Pausing rather than promoting research; Instructions to intentionally delay publication when there’s already ‘enough’ for REF and wait for the next cycle.
  • Bullying, harassment and damage to mental health, limited support (worsened by COVID). Stories of REF being used to “threaten, control, shame and otherwise exploit workers”, with people made to feel inadequate or a “failure” if their work isn’t included.
  • Contractual precarity and employment barriers; Short term contracts, teaching-only contracts, blocks on appointments or roles extended only so long as to complete a case study
  • Short termist REF framed approaches: institutional strategy scheduled in REF cycles, with research value conflated with its value within assessment
  • Overall: The efforts of trying to manage, negotiate and de-toxify these issues

Beyond the need to address these fundamental problems, colleagues also called for:

  • Practical necessities; clear and non-contradictory assessment guidance needed sooner, reduced scale of bureaucracy to learn
  • Extending focus; on team science, including those not on research contracts (techs etc)
  • Fuelling positive research culture not just assessing research environment
  • Embedding meaningful approaches to and measures of EDI
  • True recognition of interdisciplinarity
  • Support for early non-academic engagement without expectation of a specific return
  • Focus on systemic inequity, with resources focused on coaching and support
  • Recognition of the consequences of midstream funding cuts (eg. ODA projects)

I hear fairly routinely the phrase ‘keep going, nearly there’ at the moment (ie. ahead of the 31st March deadline, just over a week and counting), and have done for months. It positions REF as some kind of endurance race, with an inevitable sense of relief and doubtless a celebration event or two. This motivational chant is meant well, and for many is an accurate homily, but it belies the deep scars and potentially irreversible damage for many. Are we really upholding the principles of social good by wearing down the people who fuel its development? The academics whose knowledge underpins change. The impact specialists and research managers who sit alongside, intermediating between a drive for social change and compliance with assessment rules. Disregarding the real-world effects on colleagues tasked to make real-world impact? Is there genuinely a belief that assessment doesn’t change impact behaviour? Impact cannot just be positioned as academic duty, nor having ‘no impact’ considered some sort of defiance of sector expectations. We’ve traded too long off the motivation of people who want to make a difference, but the personal toll of doing that whilst meeting requirements for every other academic monolith is just too high.

The need to repair

It would of course be overly idealistic, and arguably impractical, to simply stop assessments, particularly as they do offer at least a scripted and largely transparent process to allocate public funds. It is similarly simplistic to blame university management when there are many examples of supportive and inclusive practice. There have always been philosophical debates about ‘what counts’ and what is ‘excellent’, particularly across disciplines, so a one-size-fits-all approach cannot fit everyone, nor am I advocating an oversimplified alternative. There are noises that the future won’t simply be REF mark 3, but will actually look to address some fundamental dilemmas about how we assess research. That is an immensely welcome prospect if true. But to what extent is there really going to be flex in a system ultimately reportable to Treasury? Reducing meaningful sector engagements into comparative and scorable scenarios, with results not upgradable for seven years, is a continuingly troubling pressure on an already exhausted sector.

The equation that gets us to a healthier position must include new variables. Thus far there has been dangerously little consideration of the resource burden on universities and the toll on people, with rhetoric idyllically expectant that universities can just ‘cream off’ the best examples of impact. However, this misses several fundamental points.

Firstly, the rule book(s) for REF run to hundreds of pages, across multiple documents and woven into multiple FAQs. Even where universities can ‘cream off’ the best cases, the necessary checks and balances require people to develop an expert-level, legalesque memory of specific points of guidance, where to find it, and to what extent it is mandatory (vs. open to interpretation). By way of clear illustration of complexity, Dr Anthony Atkin (University of Reading) recently mapped the multiple checkpoints needed to determine a single point on eligibility:

The Spider’s Web of REF impact rules. Dr Anthony Atkin (ARMA Protagonist Winter 2019)

Secondly, particularly for smaller universities, there is not simply a ‘pool’ of strong cases to draw from. If we need two cases, we have to create two cases, and often cannibalise resources from elsewhere to do so. Rather than cast for the biggest impact fish, we have to set in motion a full engine of activity to get membership to a suitable pool. The capacity burden on institutions where research – or departments – are much more newly established is far in excess of that needed for longstanding, socially partnered and challenge-led initiatives already underway.

Thirdly, assessment, or more specifically the curated, sanitised and positivist cases created for submission, creates a false sense of a simple dyad between knowledge and application. Research does not simply catalyse into impact. There has been a tendency since 2014 to use the Impact Case Study database as an exemplar dataset, displaying countable effects on policy, society, the economy and more. But these obfuscate what doesn’t work, and how much effort is wasted or otherwise screened out of the final story. The sector becomes held against an unhealthy benchmark of achievement, in much the same way that photoshopped celebrities drive an unhealthy view of ‘what’s beautiful’.

For many of us whose roles extend beyond REF, the task ahead of us is immense. Patching the wounds of this REF, disconnecting the now-conditioned response between meaningful impact and evidential compliance, and doing so as our own attitudes to impact are at best diluted. A post-REF future must recognise the ghosts of REF past. ‘Next steps’ cannot presume either a blank canvas or a sector somehow warmed by its achievements thus far. We need impact literacy. We need institutional health. We need to remember what impact is truly about and mentally and practically unbind it from REF.

The sector is reeling in so many ways, and there’s no way to do justice to the issues in a single post. But I do know this….

We need a break. We need to learn from the past. And we need to repair.

*with special thanks to WIASN for offering such important and candid commentary.


Where the Pathway ends: taking impact off-road

The original version of this article was first published in Research Professional’s Funding Insight service on 6th February 2020

So that’s it. On 26 January the government confirmed its intention to cut impact sections from grant applications. RIP Pathways to Impact, then. As we move swiftly through the five stages of collective grief (although according to my Twitter feed many have rapidly bypassed denial and anger and jumped ecstatically to acceptance) we are left wondering what a less tokenistic and administratively lighter impact-afterlife looks like.

Since UK Research and Innovation’s announcement, we have had a series of comprehensive and thoughtful responses from, for example, James Wilsdon, Kieran Fenby-Hulse, Research Impact Canada, the London School of Economics, and the Institute for Development Studies. These and others have summarised many of the key reflections and questioned if impact is still alive (spoiler: yes). Notwithstanding the nuanced commentary of each, they broadly concur on three main things:

  1. Impact pathways were reductionist and flawed, but did offer a leverage point to plan engagement and routes for research implementation.
  2. The problem wasn’t just in Pathways to Impact, but in pursuing impact within a complex and unbalanced ecosystem.
  3. Removal of Pathways to Impact both reflects, and provides opportunities for, a more impact-mature sector, but we’re far from being fully impact-literate yet.

The last decade has witnessed a significant growth in impact knowledge, capacity and expertise. Impact now routinely forms a key part of research office function, and impact specialism is a far more established area of professional practice. While arguably in the UK this has much to do with Research Excellence Framework-related investment (and, frankly, REF-related anxieties), impact expertise is now diffused across the research system in specialist roles and support infrastructure. Research managers are more routinely involved in impact throughout the research lifecycle, but the experience of supporting impact on the ground suggests we should approach the post-Pathway brave new world with caution.

Thinking ahead

Pathways and REF Impact Case Studies have always been, in a conceptual but practically untidy way, opposite ends of an impact spectrum. Research implementation is a complicated business, and Pathways was often one of the few points of contact to support researchers’ thinking about implementation realities. If speculation is correct, Pathways to Impact will be replaced with a more combined research-with-impact case for support, an increased importance of logic models and raised expectations for impact to be embedded more strongly in institutional strategy.

If this reinforces the need for researchers and research institutions to review why, how, if and when research can contribute to socially meaningful goals—including challenges and risks—then we’ve stepped forward. However, if this presumes project-level planning is unnecessary, or magnifies existing system biases around institutional ‘high achievers’ or impact being a natural consequence of excellent research, then we really haven’t learnt much at all.

While UKRI’s decision seems to herald recognition of impact achievements thus far, the suggestion that the sector is now sufficiently impact-literate to lose Pathways without ramification is concerning. There are of course many examples of impact excellence, and impact-related skills are much more prevalent than at the inception of Pathways. However, sparkly stories of impact achievement belie the patchwork nature of knowledge, engagement and support.

The need for healthy connections

Impact is, and has always been, more than a pathway document or a case study. It is, at its heart, a way to honour the university’s role within society. Universities have other ways of doing this; for example, at the University of Lincoln there is an ongoing drive to support our region as a Civic University, and to act as a “Permeable” university to break down barriers with wider society across all university functions.

Impact, however, has too often been unhealthily segmented away from core business, and the siloing of impact in a separate Pathways section was indicative of this. Systemically we invest more in impact because we’re assessed more on it. We produce great stories of impact because the small stories don’t win financial rosettes. We partition the component parts of people’s roles into measurable chunks to make assessment practicable. And the sector’s memory for impact is undermined by the short-termism of professional impact roles and their REF-tied end dates.

The announcement does not and should not signify a downturn in the impact agenda, but instead should act as a catalyst for more comprehensive and less siloed approaches.

Next steps

The question really is what’s next? Will presumptions of sector maturity divert us from the development still needed? Will there be investment which drives impact in all its shapes and sizes (not just the shiny unicorn type)? Can we build an ecosystem which actually helps drive and ensure skilled judgment of meaningful impact? And in the midst of all these questions we need to remember that there are many other funders besides Research Councils for whom impact plans remain an important part of the application process.

Whether you’re overjoyed about no longer having to ‘pathway’ research impact, or concerned about the incoming impact-replacement service, March 2020 symbolises change. We have many years of experience, and extensive expertise to draw on to ensure that the promise of societal impact from research is fulfilled. Whatever the Pathways to Impact afterlife looks like, let’s get it right.

 


Chronic (sector) health and getting back our mojo

I’ve taken a step back in recent times from Twitter. Well social media in general to be honest. It felt like I needed to, but I couldn’t at the time articulate why. I have, for the large part of late 2018 and early 2019 been fairly unwell, so that’s probably the main issue. The stents have worked, but the nerve pain is new and that’s by definition more distracting than a well-practised pain with a 10 year heritage. Add to that a number of sick bugs from school (thanks kids) and basically I’m differently wonky with a hint of nausea. Anyway the thing you become aware of with any chronic health issue is how much of you it dilutes – everything is effortful, laboured and takes a disproportionate toll on whatever you try to do.

With social media, I was – I realise – getting utterly worn out by the continual stories about bad practice within the sector. Not tired of people telling the stories (they absolutely need telling), but tired of us seemingly never getting past a sector-eats-itself situation. Stories abound about contract changes for REF / reluctance to employ early career researchers / systematic barriers to equality and diversity (etc etc) and the continued corrosion of research(er) wellbeing in the pursuit of rankings. In short, the sector is chronically unwell.

We seem to continue to find new and inventive ways to eat our young and marginalise those with less ranking ‘currency’. We’re increasingly legitimising universities as the sole dominion of research (category A anyone?) and continuing to deify metrics despite epiphanies about responsible practice. We have re-paradigmed research through our various rating systems such that only dramatic step changes in knowledge (4* anyone?) are ordained at the altar of worthiness, and the peripheralisation of ‘smaller’ research, ‘lower level’ outputs and ‘limited effects’ is leaving so many in the sector feeling overwhelmed, overlooked and undervalued.

This week I heard news of significant redundancies in my previous institution. Whilst I don’t know the details (nor the strategy on which the decision is based), I do know that, as in so many other examples, good people are feeling betrayed. We all know there are no Elysian Fields in which everyone gets funded and impact never dies, but for many, Dante’s Inferno would be a more adequate metaphor. Where loyalty is penalised and territorialism rewarded. Where overwork is perversely incentivised and wellbeing reduced to tokenistic suggestions to ‘do more exercise’. Where stress and depression are considered unfortunate but unavoidable consequences, and where positive things happen only because good people keep other good people going. I maintain that we are enormously privileged in academia to have a voice and the opportunity to make a difference, but I’m hearing people ask more and more if it’s worth it. Everyone is fighting so hard – often to stand still – and whilst it’s to their absolute credit that they keep going, that isn’t sustainable strategically or psychologically.

My self-imposed Twitter detox has – in hindsight – reflected a sense of helplessness in addressing such pervasive problems. It’s perhaps no surprise that in parallel my professional attention has shifted significantly towards un/healthy practice in all its many guises and finding ways to rebalance things. The sector voice is loud on the problems, and it’s time to step back into the ring and pick up the fight.

Ultimately this post is my weary, reflective and hopeful call for ‘better’. In whatever way that’s needed. Not shinier or bigger, but more decent and more meaningful across the piece. We all know the research landscape is complex, but we shouldn’t need to adopt a Hunger Games strategy  just to survive.

I’m professionally in a far healthier place, and hoping to re-find my twitter mojo soon, but for now my diluted energy is focused on trying to help salve a few things. The sector diagnosis might be chronic, but we’re not at terminal stage yet and that gives me enormous hope.

*Hugs it out*

J


Sausages, unicorns and strip clubs. Or Impact: the challenge of connection

*Blog post relates to talks at PraxisAuril (October 2018) and Swansea University (May 2019). This post summarises the talk (slides available here)*

First things first, what do we mean by research impact? If we look at various definitions underpinning funding (eg UKRI) and assessment (eg REF) they ultimately coalesce as the provable change (benefit) of research in the ‘real world’. That is, effects of research which are felt beyond the academic walls. Accordingly it is measured by indicators of change outside of the university, and not by markers of academic interest or publication attention.

But let’s put in some clear caveats: there’s no one size fits all, and as a community we must also be sensitive to unscripted biases. For example, the shorthand of ‘benefits’ overlooks the perspectives through which all change is seen. What is good to one person may be bad for another: reducing gambling is brilliant for society, less so for casinos. Similarly within arts and humanities, effects may be less directional and may aim towards disrupting archaism or challenging mindsets. Research which is diffused into the public arena (rather than having neatly targeted beneficiaries) will also always feel the extra weight of demonstrating change in an audience it can’t quite see. More fundamentally, the forced definitional division between academia and non-academia (‘real world’) must be used to understand where effects are felt, not to elevate or disconnect academia from its community home. So whilst definitions and shorthands are useful, they cannot and should not be used as blueprints for impact irrespective of discipline or topic.

In the talk I reflect on 6 key lessons about impact:

1. We are all custodians of impact; we each have a piece of the puzzle

Impact is not the domain of one person or one part of the research landscape. Impact is a brokered, negotiated and connective art, achieved by and for people in a myriad of ways. And it’s a team game. We each have skills, perspectives, experiences, networks and ideas which can contribute to an impact cauldron of possibilities. By recognising which parts of the impact journey we can each support (as academics, research managers, KEC professionals, communicators, strategists, funders, publishers, etc.), the big picture becomes far easier to see. For this we need to develop our impact literacy (download Emerald Publishing’s Impact Literacy Workbook here).

2. We often speak different languages

‘Impact’ is of course not a new word (although admittedly the tone has historically been one more akin to meteoric crises than research assessment). In recent years however impact has been catapulted into our collective consciousness as an important ‘thing’, but without necessarily a unified sense of what ‘it is’. Impact is often used both as a blanket term for the influence of an institution, and for the necessarily narrow contents of a REF case study. Without heading down deep philosophical paths about what it should be, the net result of blurred definitions is that we talk at odds thinking the other person knows what we mean. We end up accidentally pulling in different directions and watching impact potential drain from the space between us.

3. Impact case studies show the sausages, not the sausage factory

Sector-wide communiques about impact (such as the REF 2014 impact database) share one key feature: they only show the wins. They don’t show the paths which didn’t play out, the contracts that weren’t signed, or the audiences that didn’t show. They neatly omit the blood, sweat and tears of fighting for new partnerships only to have the company bought out at the last minute. Exalted cases are those which got through impact boot camp and found themselves presented shinily on the impact stage. If we only use these incredible examples to understand how impact works, we will never learn from what didn’t work, or appreciate that it’s ok for impact not to be perfect.

4. We need healthy, connected institutions

Just as we need to recognise individual contributions to impact, we need to ensure our institutions – which are invariably so complex – pursue impact healthily. We need to invest financially and culturally in impact, and focus on:

  • Commitment – The extent to which the organisation is committed to impact through strategy, systems, staff development and integrating impact into research and education processes.
  • Connectivity – The extent to which the organisational units work together, how they connect to an overall strategy, and how cohesive these connections are.
  • Coproduction – The extent, and quality, of engagement with non-academics to generate impactful research and meaningful effects.
  • Competencies – The impact-related skills and expertise within the institution, development of those skills across individuals and teams, and value placed on impact-related specialisms.
  • Clarity – How clearly staff within the institution understand impact, how impact extends beyond traditional expectations of academic research, and their role in delivering impact.

For more on institutional health, and to assess your own institution, download Emerald’s Impact Institutional Health Workbook here.

5. We have a tendency to chase impact unicorns.

I’ve spoken about this before, but it’s absolutely worth saying again. The weight of expectation for impact risks mythicising high-level impact to the point of meaninglessness. I’ve seen academics tearful after being rebuffed for only achieving national policy change. I’ve myself been advised to bypass work with local vulnerable communities as REF would need larger-scale effects. And I’ve seen institutions plan to spend hundreds of thousands on equipment because ‘some of the four star cases had a scanner’. Whilst it’s of course challenging for institutions to balance meaning with pursuing investment for their sustainability, we need to recognise the implications of pursuing big effects at the expense of meaningful smaller changes. This is always encapsulated for me by the wonderful Derek Stewart, who remarks that – during his treatment for throat cancer – he also just wanted to be able to swallow. Swallow. Such a simple but meaningful change which could be so easily obscured if we only gaze at the fantastical horizon.

And finally,

6. REF, done irresponsibly, is like a strip club. Some people go in with money, some leave with money, and everyone feels a bit dirtier. I think I’ll leave that point there.

Ultimately if we want to optimise the benefits of research, we need to connect expertise and centralise meaning. So, if impact is the challenge of connection, imagine what we can do if we work together.

Acknowledgements to Dr David Phipps, Emerald Publishing, Derek Stewart, University of Lincoln, ARMA and INORMS RISE group


An impact literate approach to health psychology – notes from the DHP 2018 impact session

Thanks to all those who came to the impact literacy session at the Division of Health Psychology Conference (Friday 7th September, 2018). References to everything discussed in the talk are below.

IMPACT LITERACY AND SKILLS

Impact literacy workbook and Impact Institutional Healthcheck available at https://www.emeraldpublishing.com/resources/

Bayley, J.E. and Phipps, D. (2017) Building the concept of research impact literacy. Evidence & Policy. Available online: http://www.ingentaconnect.com/content/tpp/ep/pre-prints/content-ppevidpold1600027r2

Bayley, J.E., Phipps, D., Batac, M. and Stevens, E. (2017) Development and synthesis of a Knowledge Broker Competency Framework. Evidence and Policy. Available online: https://doi.org/10.1332/174426417X14945838375124 (OA version: https://pure.coventry.ac.uk/ws/portalfiles/portal/7270403/PRE_REVIEW_Knowledge_Broker_competencies_for_repository_OPEN.pdf)

REF

REF 2014 impact case study database – http://impact.ref.ac.uk/CaseStudies/

REF 2021 guidelines – http://www.ref.ac.uk/publications/2018/draftguidanceonsubmissions201801.html

MODELS AND FRAMEWORKS

Buxton, M., & Hanney, S. (1996). How can payback from health services research be assessed? Journal of Health Services Research, 1(1), 35-43

Donovan, C. and Hanney, S. (2011) The ‘payback framework’ explained. Research Evaluation, 20(3), pp.181-183. Available at http://jonathanstray.com/papers/PaybackFramework.pdf

Phipps, D.J., Cummings, J., Pepler, D., Craig, W. and Cardinal, S. (2015) The Co-Produced Pathway to Impact describes Knowledge Mobilization Processes. Journal of Community Engagement and Scholarship. See http://jces.ua.edu/the-co-produced-pathway-to-impact-describes-knowledge-mobilization-processes/

Michie, S., Atkins, L. and West, R. (2014). The Behaviour Change Wheel: A Guide to Designing Interventions. London: Silverback Publishing. See www.behaviourchangewheel.com

Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50, 179-211. Further information available at http://people.umass.edu/aizen/tpb.diag.html

Bartholomew-Eldredge, L.K., Markham, C.M., Ruiter, R.A., Kok, G. and Parcel, G.S., 2016. Planning health promotion programs: an intervention mapping approach. John Wiley & Sons. Further information at https://interventionmapping.com/

Craig, P., Dieppe, P., Macintyre, S., Michie, S., Nazareth, I., & Petticrew, M. (2008). Developing and evaluating complex interventions: the new Medical Research Council guidance. British Medical Journal, 337, a1655. Available online https://mrc.ukri.org/documents/pdf/complex-interventions-guidance/ NB UPDATED GUIDANCE WILL BE OUT IN 2019

MY BLOGS

Avoiding imposter syndrome and impact

Chasing the impact unicorn

(Impact) life beyond REF

BROADER READING AND RESOURCES

Responsible metrics: www.responsiblemetrics.co.uk

Open Access via Unpaywall add on : unpaywall.org

CASRAI (information standards) https://casrai.org/

Analysing REF case studies: https://www.kcl.ac.uk/sspp/policy-institute/publications/Analysis-of-REF-impact.pdf

London School of Economics blog http://blogs.lse.ac.uk/impactofsocialsciences/

Evidence and Policy journal  https://policypress.co.uk/journals/evidence-and-policy

Research Evaluation journal  https://academic.oup.com/rev/

 


A very impact’y INORMS 2018

And so we’ve had INORMS. What a week. Frustratingly I spent whatever time I wasn’t impact’ing limping slowly between rooms or collapsed in a heap. Thanks to all who helped out in various ways.

After the ARMA conference I routinely write a blog summary of the Impact Special Interest Group (SIG) session (see those from 2016 and 2017). However this year’s event had a different flavour. Firstly it had the glory that is David Phipps front and centre (after his fantastic plenary). Secondly it had a wonderful international dimension which broadened impact discussions and allowed us to briefly invent ‘impact tinder’…..

So instead of a SIG review, this post picks up three key headlines from talks and discussions with impact colleagues across the week:

1. There’s life beyond the ‘EFs

It’s probably fair to say that the UK impact community operates in a fairly ‘assessment-led’ context much of the time (not of course ignoring impact within the funding space). The Research Excellence Framework (REF), especially as we get nearer to the 2020 submission date, is looming ever larger, and the flurry of impact officer jobs in recent weeks perhaps bears testimony to the weight this holds for institutions. This said, of course impact is not just REF, and many colleagues – speakers and delegates alike – spoke hearteningly about meaningful connections to practice irrespective of formal requirements. Discussions about funders, REF, TEF (Teaching Excellence Framework) and the incoming KEF (Knowledge Exchange Framework) reflected balanced caution, welcoming the broadening of agendas whilst wary of increasing administrative burden. Dialogue with our international counterparts who don’t have, or are yet to fully cement, an assessment agenda refreshed our minds towards research for social benefit full stop. The more we connect cross-nationally, the healthier our practices will be. The challenge is to ensure that the appetite to ‘make a difference’ – which sits so fundamentally within the impact community – is not overshadowed by powerfully selective agendas.

NB: For reference I am by no means anti-REF, and have said before I’m very thankful for the platform it’s opened up to recognise the importance of applied and translational work. My concerns are always about REF being used to disincentivise valuable ‘but not competitive’ practice (eg. bypassing local connections for more lucrative national partners),  amplifying the publish-or-perish mantra with irresponsible metrics (eg. arbitrary impact factor rules) and contractual consequences for poor performance. It is the collateral damage to research, impact, careers and wellbeing that I, like many of us, find so heartbreaking in practice.

2. Healthy contexts and connections are key.

As we all know, impact is not an effortless result of successful dissemination. Yet across the sector we still face the challenge of disrupting simple conceptualisations of impact and overturning default reliance on longstanding measures such as publication metrics. For this, individuals and institutions need to work in sync, not in conflict, to embed healthy practices (institutional health slides available here). It is not enough for individuals to build their own impact literacy, as unless this is supported by healthy institutions, skills development and sector-wide messaging, good practice and good intentions will just corrode over time.

A related and continued concern is that REF within institutions is reduced to a discourse of compliance. Within the impact community we’ve heard multiple anecdotes about impact officers being told to just ‘make people do impact’, ignoring the sheer scale of tailored translational effort this requires. It overlooks the skills and expertise needed to drive a REF submission, and risks treating REF managers as unskilled ‘REF monkeys’. Quite the contrary: managing any element of a REF submission requires extensive knowledge, partnership working, resilience and incredible organisational skills. A compliance-led culture not only does a considerable disservice to those in these roles, it reduces buy-in by academics to the process and fundamentally undermines REF itself. Joyfully there are many examples of healthy, connected and committed practice within institutions, where staff are valued and skills recognised. As we scale up impact agendas internationally, it’s crucial that these healthier models form the basis of institutional practice.

3. We still have a lot of lone wolves.

Impact is a team sport. It can only happen when people work together to connect research to practice. This involves researchers, impact managers, communications specialists, information managers, stakeholders, beneficiaries and many others. Insights into co-production, creative connections between universities and communities, and broader discourse around public trust in science remind us of both the challenges and opportunities for brokering work beyond the academic wall. However, whilst I use the term ‘impact community’, it’s also very apparent that many colleagues still work in isolation. These lone wolves often shoulder the weight of impact delivery across a department or even an institution, and can feel disconnected from peers. Cross-institutional connections, improved alignment of teams (not just additional committees) within the institution and a broader programme of training and development must be central moving forward.

Finally, it remains a huge privilege for me to not only be a part of, but also to champion, the impact community. It’s incredibly easy to extol the virtues not only of those in the UK but also of our global peers when the commitment to driving benefits is so clear to see. Of course this short blog post can’t reflect the depth of discussions about balancing accountability for public monies with academic freedom, nor can it capture the wealth of discussions held during INORMS itself. But it does bear witness to the investment of thinking, time and skills by so many in the sector to drive research meaningfully into practice. And I don’t know about you, but that fills me with optimism for the future.

INORMS 2020 is in Hiroshima; imagine how far our collective approach will have got us by then. *Smile*.

Slides from the SIG are available here and the Impact Literacy and Institutional Impact Health Workbooks are available here.

Particular thanks to Anthony Atkin for his gazelle-like microphone management; Laura, Tony, Vicky, Harriet and John from Emerald for continued support and not punching me when I get so impact-excitable; David Phipps, Jo Edwards, Dace Rozenberga, Esther de Smet and Lorna Wilson for being legends; the Lincoln crowd for being wonderfully welcoming; and a large army of others for making the annual conference yet again a fantastic event. Cheers!

Shiny vs. authentic impact

I spoke at the Research Impact Academy Research Impact Summit (Twitter #RISummit) this week – a fabulous free annual event, make sure to check it out! As a follow-up on Twitter I was asked by @BellaReichard about my comments on shiny vs. authentic case studies. I tried and failed to write a short Twitter response, so I’ve expanded here to better express what I mean. Thanks Bella for asking and giving me the impetus to outline my thoughts a little more.

Impact is, at its heart, making a difference through research. But within the sector, formal agendas (such as, but not restricted to REF), generally necessitate curated accounts (eg. impact case studies) which tell the story of successes. These accounts have financial or reputational weighting, ie. the stronger the story, the bigger the win, and are subsequently often also used as the basis of research to ‘understand how impact works’. The REF 2014 impact database has been used fairly extensively for that purpose both within research and within university strategy development.

However, impact is a far more complex, engaged and risk-filled process than these accounts bear witness to. Let’s be frank, it’s in no institution’s interest to say ‘we could’ve had this impact, but XYZ went wrong’, so it’s no criticism in that respect. However, the effect is to continuously present impact as big and ‘shiny’, absent of challenges, and collectively imply that anything falling short of these goliaths ‘isn’t impact’. It’s analogous to the publication bias against null findings, heightening the risk of us repeating mistakes and introducing considerable ethical implications into the research arena.

The relative absence of ‘authentic’ accounts of impact – those inclusive of barriers, challenges, misunderstandings, lost opportunities (etc) – compounds this. I’ve seen so many colleagues convinced of their inadequacy and the pointlessness of pursuing smaller effects, and convinced that a lack of impact is a failure on their part rather than a consequence of more contextual factors. So much of the sector memory on impact is about ‘what works’, and collectively muting ‘what doesn’t’ stalls our learning, dooms us to repeat misjudgements, and continues to allow individuals to mark themselves against an often unachievable benchmark.

Basically, impact isn’t always ‘big and shiny’, despite the wealth of accounts to the contrary, and we need to more fully (authentically) understand it to do it well.

So… if it’s not in the interest of institutions to shout about what goes wrong, and by extension a risk to academics to ‘admit to their failures’, how can we do this? I can’t see it being realistic anytime soon for page-limited case studies to be imbued with the inherent messiness of impact. And perhaps it serves little purpose if you consider case studies to be more like competition entries than comprehensive accounts. So instead, practically, we need to do several things to lift people’s understanding of what impact is/isn’t, stop people being made to feel like a failure, and strengthen our overall connection with society:

  1. Explore, collect and share experiences of ‘what doesn’t work’, valuing the insights these offer instead of fuelling perceptions of ‘failure’
  2. Ensure our research, practical and sector wide discussions of impact take account of the incomplete nature of dominant accounts (ie. recognise shiny case studies only tell one part of the story)
  3. Listen to, and elevate the voices of non-academics about how to connect research with their needs. We will continue to shiny-fy (now a word) impact if we only ever hear from academics.

We have such a wealth of collective learning. Let’s connect it 🙂

Questions from DHP: some responses!

The questions below are a summary of queries raised in the DHP session, with some responses from me 🙂

Is theory building impact?

Impact is the provable benefit of research in the real world; ie, the effects felt by people, business, the economy, the environment (etc) which arise somehow from our research. The way we get there is varied, connected and can be immediate or take a long time. Applied research tends to be a more direct pathway, for example with interventions being trialled or used by people, with benefit seen pretty much straight away. For research at the more exploratory or basic end of the continuum, the path is invariably more indirect. This kind of research can be analogised as providing the ‘building blocks’ of knowledge for applied research, or providing the first baton pass in the impact marathon. So is theory building impact? Not in the formal definition of impact, no. But is it a vital part of the puzzle? Absolutely yes.

What resources are available for supporting impact planning (and what does a good plan look like)?

There are so many resources now available for impact, a result of how the agenda has cemented and matured across the sector. I’ve put a range of resources on my blog post, but as a quick crib sheet:

A good impact plan is strategic (has a sense of goals and the methods to get there), is rooted in the needs of users (the ‘so what’ aspect), and strikes a balance between being achievable and not being unreasonably ‘certain’ of what’s possible in a changing environment.

What’s the role of participatory research in impact?

Participatory research is so incredibly valuable for impact. It helps identify the base ‘problem’, shape the research process, identify any necessary ‘course corrections’ throughout the process, and ensure a meaningful line of sight to effects and ways to measure them. Not all research is participatory, so there should be no presumption of precisely what relationship is needed between academics and non-academics, but if your work needs to be ‘used’, it needs people at the heart of it. If you’re starting out, find academics who publish in the area and follow their work / social media / training events, and look outwards to other countries that have centralised knowledge mobilisation and co-production (eg. The Co-produced Pathway to Impact Describes Knowledge Mobilization Processes) or broader (non-research) good practice for engaging outside of academia (eg. plan, monitor and evaluate participatory methods).

Whose impact counts?

A: I’ve slightly paraphrased this question, as in its original form it related to tensions between stakeholders and academics in determining what the focus of an intervention should be. There isn’t a single simple answer to this, as there’s no single simple way to say whose voice counts most. In any situation there may be a myriad of goals people want to focus on, or think are important, and it’s likely to be a process of negotiation and discussion, particularly when you don’t hold all the cards. I’d say always centre the needs of the main beneficiary (eg patient), and fairly and accurately determine what the intervention could reasonably achieve. It’s all well and good people wanting an intervention to change the world, but if in reality it can only raise awareness or help build self-efficacy, any impact goals outside of that may well need to be achieved by other means.

How might someone scale up a case study intervention? Should you revisit the ‘problem’, and ascertain if the problem is the same in other settings first?

Simply put, if you’re ‘relocating’ an intervention to another location (eg. another service, community, venue etc), you should sense-check that the problem and conditions are still a match. This can be light touch, for example speaking with the service manager of the new location, or a heavier-duty needs assessment as suits. Checking the ‘problem is still the problem’ means you can repeat the intervention with confidence that it’s addressing the right thing. Similarly, by checking that the context is equally conducive you can avoid unanticipated problems (eg. if your intervention requires gym access and you’ve run version 1 in the middle of a busy city, trying to repeat this as version 2 in a geographically spread rural location may not be as successful).

What top tips would you give for building impact for the next REF, and how do we best engage others who might not be aware of or interested in REF?

For so many of us, REF has been rough, and has left a legacy of a community conflating impact with assessment and hating it as a result. For the next REF we need to do a few things. Firstly, we need to heal from this one (my thoughts here!). Secondly, we need to set in place more supportive, literate and healthy institutional practices to build an inclusive environment. Thirdly, we need to recognise, and help everyone at all levels of the sector recognise, that ‘making a difference’ needs an investment in people, skills and connections with non-academia. Building engagement with impact needs to start with ‘making a difference’, and not with the agendas that oversimplify (or complicate!) what counts.

How do we best present qualitative evidence?

Qualitative evidence is so important – it shows the depth and the meaning of the change. This is almost always strongest a) using the voice of the person who benefitted (eg. quotes), b) articulated with phrasing indicating the nature/direction of the change, and c) connected back to the ‘so what’. The more we, as a community, can convey meaningful change through qualitative data, the more normalised it will be.  

A common issue with interventions (especially tech based) is low usage and high attrition, which may influence efficacy. Any tips?

Thankfully there’s already an awesome paper on this: Beyond Adoption: A New Framework for Theorizing and Evaluating Nonadoption, Abandonment, and Challenges to the Scale-Up, Spread, and Sustainability of Health and Care Technologies

How well has impact been received by academic and non-academic people? What types of challenges are you facing?

Let’s say it’s a mixed bag…! Some academics love impact but hate REF. Some hate impact and can’t see why it should be applied to their research. The most heartbreaking bit is when people feel they’re being told their research has no value unless it has impact, or that their impact ‘isn’t enough’. Sure, REF might have specific expectations and institutions might have to pick their ‘stars’, but that is fundamentally different to a statement of the value any specific piece of research has. Some research can’t ever reasonably be expected to deliver impact in the way it’s so often simply conceived. A minor soapbox moment – notwithstanding the amazing work of those whose work is showcased in case studies, too many people are feeling inadequate because of the myths and unchecked assumptions about impact, and that can’t be right. Non-academics have been unbelievably helpful, and the REF agenda has engineered an academic community more ‘primed’ to find better ways to connect with them. But it remains a challenge to do this without placing such a burden on them (providing evidence) that it sours relationships.

Chasing the ‘impact unicorn’ – myths and methods in demonstrating research benefit

An earlier version of this post appeared on the National Institute for Health Research (NIHR) blog

Whilst academics and clinicians alike are well aware of the need to ‘make research useful’, formal expectations around impact have pushed us to assume only large scale effects are ‘worthy’.

With continued pressure to secure funding and ‘do more with less’, assessment-driven thinking and impact measures such as the Research Excellence Framework 2014/2021 risk overshadowing the most basic of principles – that research of any type, scale, or subject can do good in the world.

NIHR has always been anchored in improving patient care and wellbeing, and so investigators have a genuine opportunity to connect research with patient benefit. The challenge is: how can this be done? How do we get back to basics in this pressured environment? In my experience as an academic, impact lead and former Association of Research Managers and Administrators (ARMA) impact champion, there are numerous unhelpful myths which derail impact. So let’s rebuild.

First, the myths…

Myth 1: Impact is something big which happens at (or beyond) the end of a research project.

No. Impact is a change, irrespective of its size, nature or timing. Impact is the provable benefit of research in the real world. Of course we want the biggest and best effects we can get, but if we only gaze at a longer term fantasy we’re blinkered to the smaller, stepwise changes that get us there. We need to reset our thinking to recognise the value of those necessary milestones (such as improved clinician knowledge and skills) which pave the way to something bigger (including improved accuracy of diagnosis and treatment). Unless we focus on realistic steps, we will forever chase an elusive impact unicorn.

Myth 2: Only applied research has impact

Compared to applied research, fundamental research undoubtedly requires several more steps in the translational chain before it reaches impact. However, even though it can take many years to mature, such research often starts an impact marathon with multiple baton passes: new knowledge may be cited by those in another discipline, which forms the basis for a new method, which is integrated into a new technique, which is trialled in practice and so on. The challenge (and opportunity) is to map those forward steps.

Myth 3: You can’t plan impact

It’s true impact cannot be templated. Analysis of REF case studies showed over 3,700 distinct impact pathways, proving there’s no one-size-fits-all approach. However, it isn’t true that impact can’t be planned. Whilst there is always the possibility of unexpected impact, planning impact can help us to identify:

  • What effects are possible, which are most appropriate, when they may happen and what measures or indicators might be used (eg. Patient Reported Outcome Measures (PROMs))
  • Stakeholders, including public and patient involvement
  • Risks to getting research into practice – what regulatory hurdles need to be overcome? Who might object to the work? How likely is it that the research will enter the care pathway?

Towards opportunities….

As the sector’s impact learning curve accelerates, two key opportunities for strengthening our impact are clear:

Opportunity 1: Building impact literacy

The opportunity for all those involved in health-related research is to become impact literate. That is, to understand what impact the research may have and for whom, how research can be mobilised to action, and what skills are needed to make this happen. More fundamentally, thinking about impact needs to start from ‘why’, understanding the meaning, purpose and ethics which should lead decisions about impact possibilities.

Since first publishing on impact literacy in 2017, impact has been cemented further into research consciousness, and it’s clear that deeper understanding is needed at both the individual and institutional levels. Earlier this year we published a new impact literacy paper, detailing both individual and organisational dimensions, alongside how levels of impact literacy can be developed. The new model is shown in Figure 1 below.

Figure 1: Revised model of impact literacy (2019*, updated from 2017)

Opportunity 2: Building competencies

Alongside developing understanding, we must develop skills. Impact doesn’t just happen – people make it happen. This process of translating research into tangible effects takes effort, and professional development is crucial for strengthening impact across the research community.

………………..

So let’s return to basics. Impact is a change, of whatever magnitude, type or flavour. It is the shorthand for ‘doing good from research’ and depends on us thinking about the chains, connections and people between research and effects. We can empower ourselves with the skills and understanding to judge how impact best works for our research, and develop fair, measured and proportionate expectations.

Ask yourself: how can you make impact fantasy into reality?

 

*Bayley, J and Phipps, D (2019). Extending the concept of research impact literacy: levels of literacy, institutional role and ethical considerations [version 1; peer review: 1 approved] Emerald Open Research, 1:14 (https://doi.org/10.12688/emeraldopenres.13140.1)

Notes from the BPS Northern Ireland branch conference

Thanks to all those who came to the impact literacy session at the BPS Northern Ireland conference (April 2019). References to everything discussed in the talk are below (selected slides to follow!).

IMPACT LITERACY AND SKILLS

Impact literacy workbook and Impact Institutional Healthcheck available at https://www.emeraldpublishing.com/resources/

Bayley, J.E. and Phipps, D. (2017) Building the concept of research impact literacy. Published online in Evidence & Policy. Available online: http://www.ingentaconnect.com/content/tpp/ep/pre-prints/content-ppevidpold1600027r2

Bayley, J.E., Phipps, D., Batac, M. and Stevens, E. (2017) Development and synthesis of a Knowledge Broker Competency Framework. Evidence and Policy. Available online: https://doi.org/10.1332/174426417X14945838375124 (OA version: https://pure.coventry.ac.uk/ws/portalfiles/portal/7270403/PRE_REVIEW_Knowledge_Broker_competencies_for_repository_OPEN.pdf)

REF

REF 2014 impact case study database – http://impact.ref.ac.uk/CaseStudies/

REF 2021 guidelines – http://www.ref.ac.uk/publications/2018/draftguidanceonsubmissions201801.html

MODELS AND FRAMEWORKS

Buxton, M., & Hanney, S. (1996). How can payback from health services research be assessed? Journal of Health Services Research & Policy, 1(1), 35-43

Donovan, C. and Hanney, S., 2011. The ‘payback framework’ explained. Research Evaluation, 20(3), pp. 181-183. Available at http://jonathanstray.com/papers/PaybackFramework.pdf

Phipps, D.J., Cummings, J., Pepler, D., Craig, W. and Cardinal, S. (2015) The Co-Produced Pathway to Impact describes Knowledge Mobilization Processes. Journal of Community Engagement and Scholarship. See http://jces.ua.edu/the-co-produced-pathway-to-impact-describes-knowledge-mobilization-processes/

Michie, S., Atkins, L. and West, R. (2014). The Behaviour Change Wheel: A Guide to Designing Interventions. London: Silverback Publishing. See www.behaviourchangewheel.com

Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50, 179-211. Further information available at http://people.umass.edu/aizen/tpb.diag.html

Bartholomew-Eldredge, L.K., Markham, C.M., Ruiter, R.A., Kok, G. and Parcel, G.S., 2016. Planning health promotion programs: an intervention mapping approach. John Wiley & Sons. Further information at https://interventionmapping.com/

Craig, P., Dieppe, P., Macintyre, S., Michie, S., Nazareth, I., & Petticrew, M. (2008). Developing and evaluating complex interventions: the new Medical Research Council guidance. British Medical Journal, 337, a1655. Available online: https://mrc.ukri.org/documents/pdf/complex-interventions-guidance/ (NB: updated guidance will be out in 2019)

MY BLOGS

Avoiding imposter syndrome and impact

Chasing the impact unicorn

(Impact) life beyond REF

BROADER READING AND RESOURCES

Responsible metrics: www.responsiblemetrics.co.uk

Open Access via Unpaywall add-on: unpaywall.org

CASRAI (information standards) https://casrai.org/

Analysing REF case studies: https://www.kcl.ac.uk/sspp/policy-institute/publications/Analysis-of-REF-impact.pdf

London School of Economics blog http://blogs.lse.ac.uk/impactofsocialsciences/

Evidence and Policy journal  https://policypress.co.uk/journals/evidence-and-policy

Research Evaluation journal  https://academic.oup.com/rev/