Internal impact reviews – what is institutional good practice?
We are halfway through the current REF cycle, with the submission deadline for REF 2021 likely to be just over three years away. HEFCE are reviewing their consultation documents, and we expect the first indications of how they intend to implement the Stern review in the next month or so. Right now, HEIs are beginning to pay more attention to REF preparation. A new tranche of REF and impact managers is being appointed, and a new ARMA Special Interest Group (SIG) on the REF has just started up. As part of this flurry of activity, organisations are developing different ways to prepare for assessment, both institutionally and as individuals.
Most institutions now conduct internal assessment of outputs on a regular or ongoing basis, and many are also looking at how to assess impact. Reviewing outputs and reviewing impacts present different challenges. Once an output is published, it is in the same form it will be in when the submission deadline comes around. Barring debates about possible UoA changes, most outputs can be reviewed internally, or with the help of critical friends ('external advisors'), to give research leaders and department heads a view of what strong outputs look like. Impact case studies, by contrast, are rarely 'finished'. The research may be completed, but once it's out in the wild it has a life of its own and the effects often roll on, seen or unseen.
So, what are the options for reviewing progress towards a strong impact submission? All of those we’ve heard about so far start from identification of areas of likely strong impact for each UoA. Once these broad areas have been identified, approaches to interim assessment differ. We categorise these as Completeness only, Mock-REF, and Middle way.
Completeness only
Some institutions focus on the completeness of a possible case, looking to make sure that the (likely, as we don't have the final rules yet) threshold criteria are met. This means ensuring there is underpinning research of a rigorous nature (noting a possible 'body of work' change). Many institutions are playing it safe, expecting that assurance of research quality will still be required, and so ask for some assurance that the underpinning research is of at least 2* quality. Enough of the research must have happened at the submitting institution (a 'distinct and material' contribution, in the words of REF 2014), as it is expected that portability of impact will still be prohibited, and it must have happened within the past 20 years. Some sort of impact must be taking place within the assessment window, although this is not formally assessed in this type of approach.
Mock-REF
This is the full monty approach. Case studies are drafted and evidence collected. All the components of a case study are put together for assessment by a REF-type panel. The panel usually consists of internal staff with an interest in or experience of the REF, sometimes combined with external critical friends. This requires a lot of work: evidence needs collecting, panels need assembling, and case studies will have to be amended on an ongoing basis. In this situation researchers should be given support and/or training to write the case. If case studies are graded, panels might also need training, or at least very good guidance about what makes a good case study.
At this stage, there should be an element of formative assessment as well as any summative grading. Formative assessment can then lead to specific additional resources, such as training, mentoring, funds, or research leave, where necessary and possible. It is important not to write off early-stage cases prematurely. If grades are given, it might be useful to temper them with some sort of risk measure as well.
Although this approach is hard work and time-consuming, it gives the best all-round picture and shows researchers what it takes to build a case study. The risks of giving a star rating could be marginally mitigated by doing this assessment on a University-wide, or at least Panel-wide, basis rather than UoA by UoA, so that there is a wider range of impact types to compare against.
The middle way
There are various middle-way assessment exercises that Universities are carrying out. These include narrative reports from impact leads in departments and interviews carried out by impact staff to draw up lists of impact cases. These varied approaches can respond to different requirements in different UoAs, for example where there is a new submission or a fast-growing department. However, they can put a heavy onus on research impact staff, which can detract from other, broader tasks such as training. They also don't spread learning widely across the institution, and they bring risk for the institution if the impact staff move on.
So, what should institutions look to assess at this stage? Where possible, an understanding of likely reach and significance can help with allocation of resources, and this should be combined with an assessment of specific needs. An assessment of risk to promising cases can also be valuable, for example identifying cases that rely on a single member of staff or on an assiduous post-doc on a precarious contract.
Other risks can involve neglected key relationships or very specific windows of opportunity that must be capitalised on. Any type of interim assessment should then lead to a plan of action, ideally for each case but also for departments or UoAs. Specific types of support should be developed to meet the needs of those working on cases, from peer learning networks to specialised training, support from specialist consultants, workload planning, or support for evidence collection, tracking and collation.
To look in more depth at these challenging issues, we're holding a webinar on 8 September. This will be an online discussion with three impact experts from Universities that are taking different approaches to impact monitoring and assessment. It will then be opened up for comments and questions from the audience. This one-hour, intensive session is aimed at supporting those who are developing impact cases for the next REF and will address questions such as:
- What are the options to consider when structuring an impact review?
- What support do researchers need in the process?
- How do you account for risk in developing cases?
- Should you use a panel? If so, how?